Deep Deterministic Policy Gradient

Background

(Previously: Introduction to RL Part 1: The Optimal Q-Function and the Optimal Action)

Deep Deterministic Policy Gradient (DDPG) is an algorithm which concurrently learns a Q-function and a policy. It uses off-policy data and the Bellman equation to learn the Q-function, and uses the Q-function to learn the policy.

This approach is closely connected to Q-learning, and is motivated the same way: if you know the optimal action-value function $Q^*(s,a)$, then in any given state, the optimal action $a^*(s)$ can be found by solving

$$a^*(s) = \arg\max_a Q^*(s,a).$$

DDPG interleaves learning an approximator to $Q^*(s,a)$ with learning an approximator to $a^*(s)$, and it does so in a way which is specifically adapted for environments with continuous action spaces. But what does it mean that DDPG is adapted specifically for environments with continuous action spaces? It relates to how we compute the max over actions in $\max_a Q^*(s,a)$.

When there are a finite number of discrete actions, the max poses no problem, because we can just compute the Q-values for each action separately and directly compare them. (This also immediately gives us the action which maximizes the Q-value.) But when the action space is continuous, we can't exhaustively evaluate the space, and solving the optimization problem is highly non-trivial. Using a normal optimization algorithm would make calculating $\max_a Q^*(s,a)$ a painfully expensive subroutine. And since it would need to be run every time the agent wants to take an action in the environment, this is unacceptable.

Because the action space is continuous, the function $Q^*(s,a)$ is presumed to be differentiable with respect to the action argument. This allows us to set up an efficient, gradient-based learning rule for a policy $\mu(s)$ which exploits that fact. Then, instead of running an expensive optimization subroutine each time we wish to compute $\max_a Q(s,a)$, we can approximate it with $\max_a Q(s,a) \approx Q(s,\mu(s))$. See the Key Equations section below for details.
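To make the contrast concrete, here is a minimal PyTorch sketch of the two cases; the network shapes and names are invented for illustration and are not the Spinning Up code:

```python
import torch
import torch.nn as nn

obs_dim, act_dim, n_discrete = 8, 2, 4  # hypothetical sizes

# Discrete case: one Q-value per action, so the max is a single tensor op.
q_discrete = nn.Linear(obs_dim, n_discrete)          # Q(s, .) over 4 discrete actions
s = torch.randn(1, obs_dim)
best_value, best_action = q_discrete(s).max(dim=1)   # exhaustive max over actions

# Continuous case: Q takes (s, a) as input, so we cannot enumerate actions.
# Instead, a deterministic policy mu(s) is trained so that Q(s, mu(s)) ~= max_a Q(s, a).
q_continuous = nn.Sequential(nn.Linear(obs_dim + act_dim, 64), nn.ReLU(), nn.Linear(64, 1))
mu = nn.Sequential(nn.Linear(obs_dim, 64), nn.ReLU(), nn.Linear(64, act_dim), nn.Tanh())

a = mu(s)                                                # approximate argmax action
approx_max_q = q_continuous(torch.cat([s, a], dim=-1))   # ~= max_a Q(s, a)
```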

Quick Facts

DDPG is an off-policy algorithm.
DDPG can only be used for environments with continuous action spaces.
DDPG can be thought of as being deep Q-learning for continuous action spaces.
The Spinning Up implementation of DDPG does not support parallelization.

Key Equations

Here, we’ll explain the math behind the two parts of DDPG: learning a Q function, and learning a policy.

The Q-Learning Side of DDPG

First, let's recap the Bellman equation describing the optimal action-value function, $Q^*(s,a)$. It's given by

$$Q^*(s,a) = \mathbb{E}_{s' \sim P}\left[ r(s,a) + \gamma \max_{a'} Q^*(s', a') \right],$$

where $s' \sim P$ is shorthand for saying that the next state, $s'$, is sampled by the environment from a distribution $P(\cdot \mid s,a)$.

This Bellman equation is the starting point for learning an approximator to $Q^*(s,a)$. Suppose the approximator is a neural network $Q_{\phi}(s,a)$, with parameters $\phi$, and that we have collected a set $\mathcal{D}$ of transitions $(s,a,r,s',d)$ (where $d$ indicates whether state $s'$ is terminal). We can set up a mean-squared Bellman error (MSBE) function, which tells us roughly how closely $Q_{\phi}$ comes to satisfying the Bellman equation:

$$L(\phi, \mathcal{D}) = \mathbb{E}_{(s,a,r,s',d) \sim \mathcal{D}}\left[ \left( Q_{\phi}(s,a) - \left(r + \gamma (1 - d) \max_{a'} Q_{\phi}(s',a') \right) \right)^2 \right].$$

Here, in evaluating $(1-d)$, we've used a Python convention of evaluating True to 1 and False to zero. Thus, when d==True (which is to say, when $s'$ is a terminal state), the Q-function should show that the agent gets no additional rewards after the current state. (This choice of notation corresponds to what we later implement in code.)
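For instance, the $(1-d)$ factor can be implemented by simply casting the done flag to a float, as in this minimal sketch (the tensor values are made up):

```python
import torch

# Hypothetical batch from a replay buffer: rewards, done flags, and Q-values at s'.
r = torch.tensor([1.0, 0.5, 2.0])
d = torch.tensor([False, True, False])       # True marks a terminal s'
max_q_next = torch.tensor([3.0, 9.9, 4.0])   # max_a' Q(s', a') (ignored when d is True)
gamma = 0.99

# Bellman backup: casting the done flag to float implements the (1 - d) term,
# so terminal transitions contribute only their immediate reward.
backup = r + gamma * (1 - d.float()) * max_q_next
```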

Q-learning algorithms for function approximators, such as DQN (and all its variants) and DDPG, are largely based on minimizing this MSBE loss function. There are two main tricks employed by all of them which are worth describing, and then a specific detail for DDPG.

Trick One: Replay Buffers. All standard algorithms for training a deep neural network to approximate $Q^*(s,a)$ make use of an experience replay buffer. This is the set $\mathcal{D}$ of previous experiences. In order for the algorithm to have stable behavior, the replay buffer should be large enough to contain a wide range of experiences, but it may not always be good to keep everything. If you only use the very-most recent data, you will overfit to that and things will break; if you use too much experience, you may slow down your learning. This may take some tuning to get right.
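A minimal sketch of such a buffer (similar in spirit to simple FIFO buffers used in DDPG implementations, with made-up names) might look like this:

```python
import numpy as np

class ReplayBuffer:
    """A minimal FIFO experience replay buffer for transitions (s, a, r, s', d)."""

    def __init__(self, obs_dim, act_dim, size):
        self.obs = np.zeros((size, obs_dim), dtype=np.float32)
        self.act = np.zeros((size, act_dim), dtype=np.float32)
        self.rew = np.zeros(size, dtype=np.float32)
        self.obs2 = np.zeros((size, obs_dim), dtype=np.float32)
        self.done = np.zeros(size, dtype=np.float32)
        self.ptr, self.size, self.max_size = 0, 0, size

    def store(self, o, a, r, o2, d):
        # Overwrite the oldest transition once the buffer is full.
        self.obs[self.ptr], self.act[self.ptr] = o, a
        self.rew[self.ptr], self.obs2[self.ptr], self.done[self.ptr] = r, o2, d
        self.ptr = (self.ptr + 1) % self.max_size
        self.size = min(self.size + 1, self.max_size)

    def sample_batch(self, batch_size=100):
        idxs = np.random.randint(0, self.size, size=batch_size)
        return dict(obs=self.obs[idxs], act=self.act[idxs], rew=self.rew[idxs],
                    obs2=self.obs2[idxs], done=self.done[idxs])
```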

You Should Know

We’ve mentioned that DDPG is an off-policy algorithm: this is as good a point as any to highlight why and how. Observe that the replay buffer should contain old experiences, even though they might have been obtained using an outdated policy. Why are we able to use these at all? The reason is that the Bellman equation doesn’t care which transition tuples are used, or how the actions were selected, or what happens after a given transition, because the optimal Q-function should satisfy the Bellman equation for all possible transitions. So any transitions that we’ve ever experienced are fair game when trying to fit a Q-function approximator via MSBE minimization.

Trick Two: Target Networks. Q-learning algorithms make use of target networks. The term

$$r + \gamma (1 - d) \max_{a'} Q_{\phi}(s',a')$$

is called the target, because when we minimize the MSBE loss, we are trying to make the Q-function be more like this target. Problematically, the target depends on the same parameters we are trying to train: $\phi$. This makes MSBE minimization unstable. The solution is to use a set of parameters which comes close to $\phi$, but with a time delay; that is to say, a second network, called the target network, which lags the first one. The parameters of the target network are denoted $\phi_{\text{targ}}$.

In DQN-based algorithms, the target network is just copied over from the main network every some-fixed-number of steps. In DDPG-style algorithms, the target network is updated once per main network update by polyak averaging:

$$\phi_{\text{targ}} \leftarrow \rho \phi_{\text{targ}} + (1 - \rho) \phi,$$

where $\rho$ is a hyperparameter between 0 and 1 (usually close to 1). (This hyperparameter is called polyak in our code.)
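In PyTorch, this soft update can be written in a few lines; the function below is a sketch with an illustrative name, not the Spinning Up code. Calling it once after each main-network update keeps the target network slowly trailing the main one.

```python
import torch

def polyak_update(net, target_net, rho=0.995):
    """Soft-update target parameters: phi_targ <- rho * phi_targ + (1 - rho) * phi."""
    with torch.no_grad():
        for p, p_targ in zip(net.parameters(), target_net.parameters()):
            p_targ.data.mul_(rho)
            p_targ.data.add_((1 - rho) * p.data)
```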

DDPG Detail: Calculating the Max Over Actions in the Target. As mentioned earlier: computing the maximum over actions in the target is a challenge in continuous action spaces. DDPG deals with this by using a target policy network to compute an action which approximately maximizes $Q_{\phi_{\text{targ}}}$. The target policy network is found the same way as the target Q-function: by polyak averaging the policy parameters over the course of training.

Putting it all together, Q-learning in DDPG is performed by minimizing the following MSBE loss with stochastic gradient descent:

$$L(\phi, \mathcal{D}) = \mathbb{E}_{(s,a,r,s',d) \sim \mathcal{D}}\left[ \left( Q_{\phi}(s,a) - \left(r + \gamma (1 - d) Q_{\phi_{\text{targ}}}\big(s', \mu_{\theta_{\text{targ}}}(s')\big) \right) \right)^2 \right],$$

where $\mu_{\theta_{\text{targ}}}$ is the target policy.
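A sketch of this loss in PyTorch could look like the following; the function and argument names are assumptions for illustration rather than the Spinning Up API, and `q`, `q_targ`, and `pi_targ` are assumed to be callables over batched tensors, with the Q-networks returning tensors of shape (batch,):

```python
import torch

def ddpg_q_loss(q, q_targ, pi_targ, batch, gamma=0.99):
    """MSBE loss for DDPG. `batch` is a dict of tensors (obs, act, rew, obs2, done)."""
    o, a, r = batch['obs'], batch['act'], batch['rew']
    o2, d = batch['obs2'], batch['done']

    q_vals = q(o, a)

    # The target uses the *target* networks and carries no gradient.
    with torch.no_grad():
        a2 = pi_targ(o2)                              # action from the target policy
        backup = r + gamma * (1 - d) * q_targ(o2, a2)

    return ((q_vals - backup) ** 2).mean()
```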

The Policy Learning Side of DDPG

Policy learning in DDPG is fairly simple. We want to learn a deterministic policy $\mu_{\theta}(s)$ which gives the action that maximizes $Q_{\phi}(s,a)$. Because the action space is continuous, and we assume the Q-function is differentiable with respect to action, we can just perform gradient ascent (with respect to policy parameters only) to solve

$$\max_{\theta} \mathbb{E}_{s \sim \mathcal{D}}\left[ Q_{\phi}(s, \mu_{\theta}(s)) \right].$$

Note that the Q-function parameters are treated as constants here.
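A minimal PyTorch sketch of this step might look as follows; `ddpg_pi_loss` and `update_policy` are illustrative names, and `q` and `pi` are assumed to be modules callable as `q(obs, act)` and `pi(obs)`:

```python
def ddpg_pi_loss(q, pi, obs):
    """Policy objective: maximize E[Q(s, mu(s))] by minimizing its negative."""
    return -q(obs, pi(obs)).mean()

def update_policy(q, pi, pi_optimizer, obs):
    """One policy update step; pi_optimizer is an optimizer over pi's parameters only."""
    # Freeze Q-network parameters so this step only adjusts the policy.
    for p in q.parameters():
        p.requires_grad = False

    pi_optimizer.zero_grad()
    loss = ddpg_pi_loss(q, pi, obs)
    loss.backward()                   # gradients flow through Q into pi only
    pi_optimizer.step()

    # Unfreeze Q-network parameters for the next Q-learning step.
    for p in q.parameters():
        p.requires_grad = True
```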

Exploration vs. Exploitation

DDPG trains a deterministic policy in an off-policy way. Because the policy is deterministic, if the agent were to explore on-policy, in the beginning it would probably not try a wide enough variety of actions to find useful learning signals. To make DDPG policies explore better, we add noise to their actions at training time. The authors of the original DDPG paper recommended time-correlated OU noise, but more recent results suggest that uncorrelated, mean-zero Gaussian noise works perfectly well. Since the latter is simpler, it is preferred. To facilitate getting higher-quality training data, you may reduce the scale of the noise over the course of training. (We do not do this in our implementation, and keep noise scale fixed throughout.)

At test time, to see how well the policy exploits what it has learned, we do not add noise to the actions.
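A minimal sketch of action selection along these lines, assuming a PyTorch policy `pi` mapping observations to actions and a symmetric action bound `act_limit`, might look like this:

```python
import numpy as np
import torch

def get_action(pi, obs, noise_scale, act_limit, deterministic=False):
    """Select an action from a deterministic policy, adding mean-zero Gaussian
    exploration noise at training time only, and clipping to the action bounds."""
    with torch.no_grad():
        a = pi(torch.as_tensor(obs, dtype=torch.float32)).numpy()
    if not deterministic:                              # no noise at test time
        a = a + noise_scale * np.random.randn(*a.shape)
    return np.clip(a, -act_limit, act_limit)
```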

You Should Know

Our DDPG implementation uses a trick to improve exploration at the start of training. For a fixed number of steps at the beginning (set with the start_steps keyword argument), the agent takes actions which are sampled from a uniform random distribution over valid actions. After that, it returns to normal DDPG exploration.

Pseudocode
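
The original page presents the full DDPG pseudocode as a figure, which is not reproduced here. In its place, the following is a condensed Python-style sketch of the training loop. It assumes a Gymnasium-style environment (reset returns (obs, info); step returns (obs, reward, terminated, truncated, info)), main and target networks q, q_targ, pi, pi_targ with their optimizers, hyperparameters such as start_steps, update_after, update_every, noise_scale, act_limit, and polyak, and the helper functions from the earlier snippets. It conveys the structure of the algorithm rather than the exact Spinning Up implementation.

```python
import torch

# DDPG training loop (sketch).
obs, _ = env.reset()
buffer = ReplayBuffer(obs_dim, act_dim, size=int(1e6))

for t in range(total_steps):
    # 1. Collect experience: uniform random actions early on, noisy policy actions after.
    if t < start_steps:
        act = env.action_space.sample()
    else:
        act = get_action(pi, obs, noise_scale, act_limit)

    obs2, rew, terminated, truncated, _ = env.step(act)
    buffer.store(obs, act, rew, obs2, float(terminated))   # time-limit truncation is not terminal
    obs = obs2 if not (terminated or truncated) else env.reset()[0]

    # 2. Update networks after an initial warm-up period.
    if t >= update_after and t % update_every == 0:
        for _ in range(update_every):
            batch = {k: torch.as_tensor(v) for k, v in buffer.sample_batch().items()}

            # Q-learning step: minimize the MSBE loss against the target networks.
            q_optimizer.zero_grad()
            ddpg_q_loss(q, q_targ, pi_targ, batch).backward()
            q_optimizer.step()

            # Policy step: gradient ascent on E[Q(s, mu(s))], with Q frozen.
            update_policy(q, pi, pi_optimizer, batch['obs'])

            # Target network step: polyak-average both target networks.
            polyak_update(q, q_targ, rho=polyak)
            polyak_update(pi, pi_targ, rho=polyak)
```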


